
    Deep Learning-Enabled Text Semantic Communication under Interference: An Empirical Study

    At the confluence of 6G, deep learning (DL), and natural language processing (NLP), DL-enabled text semantic communication (SemCom) has emerged as a 6G enabler by promising to minimize bandwidth consumption, transmission delay, and power usage. Among text SemCom techniques, DeepSC is a popular scheme that leverages advancements in DL and NLP to reliably transmit semantic information in low signal-to-noise ratio (SNR) regimes. To understand the fundamental limits of such a transmission paradigm, our recently developed theory [Getu'23_Performance_Limits] predicted the performance limits of DeepSC under radio frequency interference (RFI). Although these limits were corroborated by simulations, trained deep networks can defy classical statistical wisdom, and hence extensive computer experiments are needed to validate our theory. Accordingly, this empirical work addresses the training and testing of DeepSC using the proceedings of the European Parliament (Europarl) dataset. Employing training, validation, and testing sets tokenized and vectorized from Europarl, we train the DeepSC architecture in Keras 2.9 with TensorFlow 2.9 as a backend and test it under Gaussian multi-interferer RFI received over Rayleigh fading channels. Validating our theory, the testing results corroborate that DeepSC produces semantically irrelevant sentences as the number of Gaussian RFI emitters gets very large. Therefore, a fundamental 6G design paradigm for interference-resistant and robust SemCom (IR² SemCom) is needed.
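
    As a rough illustration of the test setup described above, the sketch below shows one way to corrupt complex baseband symbols with Gaussian multi-interferer RFI received over Rayleigh fading. It is not the authors' code: the function name, the per-interferer interference-to-noise ratio (INR) parameter, and all numeric values are assumptions chosen for illustration.

    # Hypothetical sketch (not the authors' code): Gaussian multi-interferer RFI
    # received over Rayleigh fading, added to unit-power transmit symbols.
    import numpy as np

    rng = np.random.default_rng(0)

    def rayleigh_rfi_channel(x, snr_db, num_interferers, inr_db):
        """Pass symbols x through Rayleigh fading with AWGN plus Gaussian RFI."""
        n = x.shape[0]
        # Rayleigh fading coefficient of the desired link (unit average power).
        h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        y = h * x
        # Additive white Gaussian noise at the given SNR (signal power = 1).
        noise_power = 10 ** (-snr_db / 10)
        y += np.sqrt(noise_power / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        # Each Gaussian RFI emitter arrives over its own independent Rayleigh fade.
        interferer_power = noise_power * 10 ** (inr_db / 10)
        for _ in range(num_interferers):
            g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
            s = np.sqrt(interferer_power / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
            y += g * s
        return y

    # Example: 1000 QPSK symbols, 6 dB SNR, 8 interferers at 0 dB INR each.
    x = np.exp(1j * (np.pi / 4) * (2 * rng.integers(0, 4, 1000) + 1))
    y = rayleigh_rfi_channel(x, snr_db=6.0, num_interferers=8, inr_db=0.0)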

    Performance Limits of a Deep Learning-Enabled Text Semantic Communication under Interference

    Deep learning (DL)-enabled semantic communication (SemCom) has emerged as a 6G enabler that promises to minimize power usage, bandwidth consumption, and transmission delay by minimizing irrelevant information transmission. However, the benefits of such a semantic-centric design can be limited by radio frequency interference (RFI) that causes substantial semantic noise. The impact of semantic noise due to interference can be alleviated using an interference-resistant and robust (IR²) SemCom design. Nevertheless, no such design exists yet. To shed light on this knowledge gap and stimulate fundamental research on IR² SemCom, the performance limits of a text SemCom system named DeepSC are studied in the presence of (multi-interferer) RFI. By introducing a principled probabilistic framework for SemCom, we show that DeepSC produces semantically irrelevant sentences as the power of (multi-interferer) RFI gets very large. We also derive DeepSC's practical limits and a lower bound on its outage probability under multi-interferer RFI. Moreover, toward a fundamental 6G design for IR² SemCom, we propose a generic lifelong DL-based IR² SemCom system. Finally, we corroborate the derived performance limits with Monte Carlo simulations and computer experiments, which also affirm the vulnerability of DeepSC and DL-enabled text SemCom to a wireless attack using RFI.
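
    The notion of an outage probability under multi-interferer RFI can be pictured with a small Monte Carlo sketch. The model below (Rayleigh-faded desired link and interferers, unit noise power, a hypothetical SINR threshold) is an assumption for illustration only and is not the paper's analytical bound.

    # Illustrative Monte Carlo sketch: estimate P(SINR < threshold) as the number
    # of equal-power Gaussian RFI emitters grows. All parameters are assumed.
    import numpy as np

    rng = np.random.default_rng(1)

    def outage_probability(snr_db, num_interferers, inr_db, threshold_db, trials=100_000):
        snr = 10 ** (snr_db / 10)
        inr = 10 ** (inr_db / 10)
        threshold = 10 ** (threshold_db / 10)
        # Squared magnitude of a unit-power Rayleigh fade is Exp(1)-distributed.
        desired = snr * rng.exponential(1.0, trials)
        interference = inr * rng.exponential(1.0, (trials, num_interferers)).sum(axis=1)
        sinr = desired / (1.0 + interference)
        return float(np.mean(sinr < threshold))

    for k in (0, 1, 4, 16, 64):
        print(k, "interferers ->", outage_probability(10.0, k, 0.0, 3.0))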

    Terahertz-Band Integrated Sensing and Communications: Challenges and Opportunities

    Sixth generation (6G) wireless networks aim to achieve ultra-high data transmission rates, very low latency, and enhanced energy efficiency. To this end, the terahertz (THz) band is one of the key enablers of 6G to meet such requirements. THz-band systems are also quickly emerging as high-resolution sensing devices because of their ultra-wide bandwidth and very narrow beamwidth. As a means to efficiently utilize spectrum and thereby save cost and power, the THz integrated sensing and communications (ISAC) paradigm envisages a single integrated hardware platform with a common signaling mechanism. However, ISAC at THz-band entails several design challenges such as beam split, range-dependent bandwidth, near-field beamforming, and distinct channel models. This article examines the technologies that have the potential to bring ISAC and THz transmission together. In particular, it provides an overview of antenna and array design, hybrid beamforming, integration with reflecting surfaces, and data-driven techniques such as machine learning. These systems also provide research opportunities in developing novel methodologies for channel estimation, near-field beam split, waveform design, and beam misalignment.
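
    Of the challenges listed above, beam split is easy to visualize numerically: phase shifters tuned to the carrier frequency steer different subcarriers of an ultra-wide band toward different angles. The sketch below uses assumed values (300 GHz carrier, 30 GHz bandwidth, 128-element uniform linear array); it is an illustration, not a design from the article.

    # Minimal beam-split sketch: a ULA steered to 45 degrees at the carrier fc
    # with frequency-independent phase shifters; the beam squints at band edges.
    import numpy as np

    c = 3e8                  # speed of light (m/s)
    fc = 300e9               # assumed carrier frequency: 300 GHz
    bandwidth = 30e9         # assumed total bandwidth: 30 GHz
    num_antennas = 128
    theta0 = np.deg2rad(45)  # intended steering angle at fc
    d = c / fc / 2           # half-wavelength spacing at fc

    n = np.arange(num_antennas)
    # Phase-shifter weights designed for the carrier frequency only.
    w = np.exp(1j * 2 * np.pi * fc * d * n * np.sin(theta0) / c) / np.sqrt(num_antennas)

    def beam_peak_angle(f):
        """Angle (degrees) at which the array gain peaks for subcarrier frequency f."""
        angles = np.deg2rad(np.linspace(0.0, 90.0, 9001))
        steering = np.exp(1j * 2 * np.pi * f * d * n[:, None] * np.sin(angles)[None, :] / c)
        gain = np.abs(w.conj() @ steering)
        return float(np.rad2deg(angles[np.argmax(gain)]))

    for f in (fc - bandwidth / 2, fc, fc + bandwidth / 2):
        print(f"{f / 1e9:.0f} GHz -> beam peaks near {beam_peak_angle(f):.2f} deg")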

    CBRS Spectrum Sharing between LTE-U and WiFi: A Multiarmed Bandit Approach

    The surge of mobile devices such as smartphones and tablets requires additional capacity. To achieve ubiquitous and high data rate Internet connectivity, effective sharing and utilization of the wireless spectrum are of critical importance. In this paper, we consider the use of unlicensed LTE (LTE-U) technology in the 3.5 GHz Citizens Broadband Radio Service (CBRS) band and develop a multiarmed bandit (MAB) based spectrum sharing technique for a smooth coexistence with WiFi. In particular, we consider LTE-U operating as a General Authorized Access (GAA) user, whereby MAB is used to adaptively optimize the transmission duty cycle of LTE-U transmissions. Additionally, we incorporate downlink power control, which yields high energy efficiency and interference suppression. Simulation results demonstrate a significant improvement in the aggregate capacity (approximately 33%) and cell-edge throughput of coexisting LTE-U and WiFi networks for different base station and user densities.
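
    The duty-cycle adaptation can be pictured as a standard bandit loop. The sketch below is a generic UCB1 learner over a few candidate duty cycles with a toy, noisy coexistence reward; the reward model, the candidate set, and all constants are hypothetical and not taken from the paper.

    # Hypothetical UCB1 sketch: an LTE-U node treats candidate transmission duty
    # cycles as bandit arms and learns the one maximizing a simulated reward.
    import numpy as np

    rng = np.random.default_rng(2)
    duty_cycles = np.array([0.2, 0.35, 0.5, 0.65, 0.8])  # candidate LTE-U on-fractions

    def coexistence_reward(duty_cycle):
        """Toy stand-in for a measured aggregate LTE-U + WiFi utility (noisy)."""
        lte_rate = duty_cycle
        wifi_rate = np.sqrt(1.0 - duty_cycle)  # WiFi degrades as its airtime shrinks
        return lte_rate + wifi_rate + 0.05 * rng.standard_normal()

    counts = np.zeros(len(duty_cycles))
    means = np.zeros(len(duty_cycles))
    for t in range(1, 2001):
        if np.any(counts == 0):
            arm = int(np.argmin(counts))  # play every arm once before using UCB
        else:
            ucb = means + np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = coexistence_reward(duty_cycles[arm])
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]

    print("learned duty cycle:", duty_cycles[int(np.argmax(means))])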

    On Minimizing Energy Consumption for D2D Clustered Caching Networks

    We formulate and solve the energy minimization problem for a clustered device-to-device (D2D) network with cache-enabled mobile devices. Devices are distributed according to a Poisson cluster process (PCP) and are assumed to have surplus memory that is exploited to proactively cache files from a library. Devices can retrieve the requested files from their own caches, from neighboring devices in their proximity (cluster), or from the base station as a last resort. We minimize the energy consumption of the proposed network under a random probabilistic caching scheme, where files are independently cached according to a specific probability distribution. A closed-form expression for the D2D coverage probability is obtained. The energy consumption problem is then formulated as a function of the caching distribution, and the optimal probabilistic caching distribution is obtained. Results reveal that the proposed caching distribution reduces energy consumption by up to 33% compared with the scheme that caches only the most popular files.
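
    The mechanism of random probabilistic caching can be sketched with a toy calculation: under a Zipf popularity profile, each device caches file i with probability p_i, and a request is served inside the cluster unless every one of the N cluster devices misses it. The popularity exponent, cache size, cluster size, and the illustrative placement below are assumptions, not the paper's optimized distribution.

    # Toy sketch: compare caching only the most popular files with a smoother
    # probabilistic placement, using the per-file hit probability 1 - (1 - p_i)^N.
    import numpy as np

    library_size, cache_size, devices_per_cluster = 100, 5, 10
    zipf_exponent = 0.8

    # Zipf file popularity q_i (probability that file i is requested).
    q = 1.0 / np.arange(1, library_size + 1) ** zipf_exponent
    q /= q.sum()

    def cluster_hit_probability(p):
        """Expected probability that a request is served within the cluster."""
        return float(np.sum(q * (1.0 - (1.0 - p) ** devices_per_cluster)))

    # (a) Every device caches the cache_size most popular files.
    p_popular = np.zeros(library_size)
    p_popular[:cache_size] = 1.0

    # (b) An illustrative probabilistic placement proportional to sqrt(q),
    #     scaled so the expected cache occupancy equals cache_size.
    p_random = np.sqrt(q)
    p_random = np.clip(p_random * cache_size / p_random.sum(), 0.0, 1.0)

    print("popular-files cluster hit probability :", cluster_hit_probability(p_popular))
    print("probabilistic cluster hit probability :", cluster_hit_probability(p_random))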

    Peri-operative red blood cell transfusion in neonates and infants: NEonate and Children audiT of Anaesthesia pRactice IN Europe: A prospective European multicentre observational study

    BACKGROUND: Little is known about current clinical practice concerning peri-operative red blood cell transfusion in neonates and small infants. Guidelines suggest transfusions based on haemoglobin thresholds ranging from 8.5 to 12 g dl⁻¹, distinguishing between children from birth to day 7 (week 1), from day 8 to day 14 (week 2) or from day 15 onwards (≥ week 3). OBJECTIVE: To observe peri-operative red blood cell transfusion practice according to guidelines in relation to patient outcome. DESIGN: A multicentre observational study. SETTING: The NEonate-Children sTudy of Anaesthesia pRactice IN Europe (NECTARINE) trial recruited patients up to 60 weeks' postmenstrual age undergoing anaesthesia for surgical or diagnostic procedures from 165 centres in 31 European countries between March 2016 and January 2017. PATIENTS: The data included 5609 patients undergoing 6542 procedures. The inclusion criterion was a peri-operative red blood cell transfusion. MAIN OUTCOME MEASURES: The primary endpoint was the haemoglobin level triggering a transfusion for neonates in week 1, week 2 and week 3. Secondary endpoints were transfusion volumes, 'delta haemoglobin' (preprocedure minus transfusion-triggering) and 30-day and 90-day morbidity and mortality. RESULTS: Peri-operative red blood cell transfusions were recorded during 447 procedures (6.9%). The median haemoglobin levels triggering a transfusion were 9.6 [IQR 8.7 to 10.9] g dl⁻¹ for neonates in week 1, 9.6 [7.7 to 10.4] g dl⁻¹ in week 2 and 8.0 [7.3 to 9.0] g dl⁻¹ in week 3. The median transfusion volume was 17.1 [11.1 to 26.4] ml kg⁻¹ with a median delta haemoglobin of 1.8 [0.0 to 3.6] g dl⁻¹. Thirty-day morbidity was 47.8% with an overall mortality of 11.3%. CONCLUSIONS: Results indicate lower transfusion-triggering haemoglobin thresholds in clinical practice than suggested by current guidelines. The high morbidity and mortality of this NECTARINE sub-cohort call for investigative action and evidence-based guidelines addressing peri-operative red blood cell transfusion strategies. TRIAL REGISTRATION: ClinicalTrials.gov, identifier: NCT02350348.

    Decentralized Cross-Tier Interference Mitigation in Cognitive Femtocell Networks

    In this paper, recent results in game theory and stochastic approximation are brought together to mitigate the problem of femto-to-macrocell cross-tier interference. The main result of this paper is an algorithm that reduces the impact of interference from femtocells on the existing macrocells. The algorithm relies on observations of the signal-to-interference-plus-noise ratio (SINR) of all active communications in both macro- and femtocells, which are fed back to the corresponding base stations. Based on such observations, femto base stations learn the probability distributions over the feasible transmit configurations (frequency band and power levels) such that a minimum time-average SINR can be guaranteed in the macrocells at the equilibrium. We introduce the concept of logit equilibrium (LE) and present its interpretation in terms of the trade-off femtocells face between experimenting with several actions to discover the network and taking the action that maximizes their instantaneous performance. Finally, numerical results are given to validate our theoretical findings.
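
    The logit (Boltzmann-Gibbs) choice rule behind this trade-off can be sketched as follows: each femto base station keeps running utility estimates for its feasible transmit configurations, samples a configuration from a softmax of those estimates, and updates the chosen estimate with a decaying stochastic-approximation step. The utility function, the configuration set, and the temperature below are toy assumptions, not the paper's exact model.

    # Illustrative logit-learning sketch for one femto base station choosing a
    # (frequency band, power level) configuration from SINR-based rewards.
    import numpy as np

    rng = np.random.default_rng(3)
    configs = [(band, power) for band in range(3) for power in (5, 10, 15)]  # dBm levels
    temperature = 0.5                   # exploration vs exploitation trade-off
    estimates = np.zeros(len(configs))  # running utility estimates

    def observed_utility(config):
        """Toy stand-in for a reward built from fed-back femto/macro SINR values."""
        band, power = config
        macro_penalty = 0.05 * power if band == 0 else 0.0  # band 0 harms the macrocell
        return 0.1 * power - macro_penalty + 0.2 * rng.standard_normal()

    for t in range(1, 5001):
        logits = estimates / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()            # logit (softmax) probability distribution
        k = rng.choice(len(configs), p=probs)
        reward = observed_utility(configs[k])
        estimates[k] += (reward - estimates[k]) / t ** 0.6  # stochastic approximation

    print("most attractive configuration (band, power):", configs[int(np.argmax(estimates))])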
